Results 1 - 20 of 24
1.
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) ; 13741 LNCS:154-159, 2023.
Article in English | Scopus | ID: covidwho-20243449

ABSTRACT

Due to the recent COVID-19 pandemic, people tend to wear masks both indoors and outdoors. As a result, face recognition systems such as FaceID showed a decline in accuracy, and many studies were conducted to improve recognition accuracy on masked faces. Most of them aimed to enhance the dataset and retrain the models to obtain reasonable accuracy, but little work has explained the reasons behind the improvement. We therefore focused on finding an explainable reason for the improvement of the model's accuracy. First, we observed that accuracy actually increased by 12.86% after training with a masked dataset. We then applied Explainable AI (XAI) to check whether the model really focused on the regions of interest. The generated heatmaps show that differences in the training data lead to differences in the regions the models focus on. © 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.

2.
4th International Conference on Robotics, Intelligent Control and Artificial Intelligence, RICAI 2022 ; : 1185-1190, 2022.
Article in English | Scopus | ID: covidwho-2324495

ABSTRACT

Face mask image recognition can detect and monitor whether people are wearing masks. Current mask recognition research focuses mainly on different mask detection systems; however, these methods work with limited datasets, do not give safety alerts, and do not handle masks appropriately. This paper aims to use a face mask recognition model in public places to monitor people who are not wearing a mask, or are wearing one incorrectly, in order to reduce the spread of Covid-19. The mask detection model is built on transfer learning and image classification. Specifically, the collected data are first divided into two parts, with_mask and without_mask. The authors then build and implement the model to obtain accurate mask recognition models. Image datasets of different sizes are tested; the experimental results show that one of the image sizes performs relatively better, and the training accuracy of the different MobileNetV2 models is about 95%. Our analysis demonstrates that MobileNetV2 can correctly perform this Covid-19 mask classification. © 2022 ACM.
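For readers unfamiliar with the recipe this abstract outlines, the sketch below shows one way a pretrained MobileNetV2 can be repurposed for binary with_mask / without_mask classification via transfer learning. It is an illustration only, not the authors' code: the directory layout, image size, frozen backbone, and hyperparameters are all assumptions.

```python
# Minimal transfer-learning sketch for binary mask classification (illustrative only).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Assumed directory layout: data/with_mask/*.jpg and data/without_mask/*.jpg
tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("data", transform=tfm)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Pretrained MobileNetV2 backbone (torchvision >= 0.13), frozen; new 2-class head.
model = models.mobilenet_v2(weights="DEFAULT")
for p in model.features.parameters():
    p.requires_grad = False
model.classifier[1] = nn.Linear(model.last_channel, 2)

optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```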

3.
2nd International Conference on Sustainable Computing and Data Communication Systems, ICSCDS 2023 ; : 770-773, 2023.
Article in English | Scopus | ID: covidwho-2325493

ABSTRACT

Although many facial emotion recognition models exist, the Covid-19 pandemic has rendered the majority of such algorithms obsolete, as everybody is compelled to wear a face mask to protect themselves against the virus. Face masks hinder emotion recognition systems because crucial facial features are not visible in the image: masks cover essential parts of the face, such as the mouth, nose, and cheeks, which play an important role in differentiating between emotions. This study aims to recognize the emotional states anger-disgust, neutral, surprise-fear, joy, and sadness of a person wearing a face mask. In the proposed method, a CNN model is trained on images of people wearing masks. To achieve higher accuracy, classes in the dataset are combined; different combinations are evaluated and the results recorded. Images are taken from the FER2013 dataset, which consists of a large number of manually annotated facial images. © 2023 IEEE.

4.
16th IEEE International Conference on Signal-Image Technology and Internet-Based Systems, SITIS 2022 ; : 184-189, 2022.
Article in English | Scopus | ID: covidwho-2317360

ABSTRACT

In this article, we tackle the recognition of faces wearing surgical masks. Surgical masks have become a necessary piece of daily apparel because of the COVID-19-related worldwide health crisis. Modern face recognition models struggle because they were not designed to work with masked faces. Furthermore, in order to stop the infection from spreading, applications capable of detecting whether individuals are wearing masks are also required. To address these issues, we present an end-to-end approach for training face recognition models based on the ArcFace architecture, including several changes to the backbone and the loss computation. We also use data augmentation to generate a masked version of the original dataset and mix the two on the fly during training. Without incurring any additional computational cost, we modify the chosen network to also output the likelihood of wearing a mask. The face recognition loss and the mask-usage loss are thus merged into a new function called Multi-Task ArcFace (MTArcFace). The conducted experiments demonstrate that our method outperforms the baseline model when faces with masks are considered, while achieving similar metrics on the original dataset. In addition, it obtains a mean accuracy of 99.78% in mask-usage classification. © 2022 IEEE.
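The sketch below illustrates, in PyTorch, how an ArcFace-style identity loss and a mask-usage loss can be merged into a single multi-task objective in the spirit of MTArcFace. It is not the paper's implementation: the scale and margin values, the binary mask head, and the weighting factor are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ArcMarginLoss(nn.Module):
    """Simplified ArcFace-style additive angular margin loss (illustrative)."""
    def __init__(self, feat_dim, n_identities, s=64.0, m=0.5):
        super().__init__()
        self.w = nn.Parameter(torch.randn(n_identities, feat_dim))
        self.s, self.m = s, m

    def forward(self, feats, labels):
        cos = F.linear(F.normalize(feats), F.normalize(self.w))
        theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
        target_cos = torch.cos(theta + self.m)      # add margin on the target angle
        one_hot = F.one_hot(labels, cos.size(1)).float()
        logits = self.s * (one_hot * target_cos + (1 - one_hot) * cos)
        return F.cross_entropy(logits, labels)

class MultiTaskHead(nn.Module):
    """Shared face embedding feeds an identity loss plus a binary mask-usage head."""
    def __init__(self, feat_dim, n_identities, mask_weight=1.0):
        super().__init__()
        self.id_loss = ArcMarginLoss(feat_dim, n_identities)
        self.mask_head = nn.Linear(feat_dim, 1)
        self.mask_weight = mask_weight

    def forward(self, feats, id_labels, mask_labels):
        id_term = self.id_loss(feats, id_labels)
        mask_term = F.binary_cross_entropy_with_logits(
            self.mask_head(feats).squeeze(1), mask_labels.float())
        return id_term + self.mask_weight * mask_term
```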

5.
Traitement du Signal ; 40(1):327-334, 2023.
Article in English | Scopus | ID: covidwho-2293378

ABSTRACT

In the current era, Optical Character Recognition (OCR) models play a vital role in converting images of handwritten characters or words into editable text. During the COVID-19 pandemic, students' performance was assessed through multiple-choice questions and handwritten answers, making the need for handwriting recognition acute. Handwritten answers in any regional language need an OCR model to transform them into machine-encoded text for automatic assessment, which reduces the burden of manual assessment. A single Convolutional Neural Network (CNN) can recognize handwritten characters, but its accuracy degrades as the dataset volume increases. The proposed work uses stacking and soft-voting ensemble mechanisms that combine multiple CNN models to recognize handwritten characters, and the performance of the ensemble mechanism is significantly better than that of a single CNN model. The proposed work ensembles VGG16, AlexNet, and LeNet-5 as base classifiers using stacking and soft-voting ensemble approaches. The overall accuracy of the proposed work is 98.66% when the soft-voting ensemble uses the three CNN classifiers. © 2023 Lavoisier. All rights reserved.
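As a quick illustration of the soft-voting idea described above (not the paper's code), the snippet below averages the softmax outputs of several trained classifiers and takes the argmax; the model names in the usage comment are placeholders.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def soft_vote(models, images):
    """Average the softmax outputs of several classifiers and pick the argmax class."""
    probs = [F.softmax(m(images), dim=1) for m in models]
    return torch.stack(probs).mean(dim=0).argmax(dim=1)

# Usage (assumed): vgg16, alexnet, lenet5 are trained handwritten-character classifiers
# preds = soft_vote([vgg16, alexnet, lenet5], batch_of_character_images)
```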

6.
International Conference on IoT, Intelligent Computing and Security, IICS 2021 ; 982:117-133, 2023.
Article in English | Scopus | ID: covidwho-2297722

ABSTRACT

COVID-19 mutates constantly, and every three or four months a new, often more dangerous, variant emerges. The main measures that prevent infection are getting vaccinated and wearing a face mask. In this paper, we implement a new face mask detection and person recognition model based on InsightFace, which uses the ArcFace loss on top of a SoftMax classification head, and name it the Rapid Face Detection and Person Identification Model based on Deep Neural Networks (RFMPI-DNN); it detects face masks and person identity rapidly compared with other available models. For comparison, we use the previous MobileNet_V2 model and a face recognition module, evaluated on the basis of time. The proposed model outperformed the compared model in every aspect. © 2023, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

7.
22nd Joint European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, ECML PKDD 2022 ; 13715 LNAI:304-320, 2023.
Article in English | Scopus | ID: covidwho-2289167

ABSTRACT

Deep learning-based facial recognition (FR) models have demonstrated state-of-the-art performance in the past few years, even when wearing protective medical face masks became commonplace during the COVID-19 pandemic. Given the outstanding performance of these models, the machine learning research community has shown increasing interest in challenging their robustness. Initially, researchers presented adversarial attacks in the digital domain, and later the attacks were transferred to the physical domain. However, in many cases, attacks in the physical domain are conspicuous, and thus may raise suspicion in real-world environments (e.g., airports). In this paper, we propose Adversarial Mask, a physical universal adversarial perturbation (UAP) against state-of-the-art FR models that is applied on face masks in the form of a carefully crafted pattern. In our experiments, we examined the transferability of our adversarial mask to a wide range of FR model architectures and datasets. In addition, we validated our adversarial mask's effectiveness in real-world experiments (CCTV use case) by printing the adversarial pattern on a fabric face mask. In these experiments, the FR system was only able to identify 3.34% of the participants wearing the mask (compared to a minimum of 83.34% with other evaluated masks). A demo of our experiments can be found at: https://youtu.be/_TXkDO5z11w. © 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.

8.
IEEE Transactions on Biometrics, Behavior, and Identity Science ; : 1-1, 2023.
Article in English | Scopus | ID: covidwho-2286289

ABSTRACT

During the COVID-19 epidemic, almost everyone wears a mask to prevent the spread of the virus. This raises a problem: traditional face recognition models largely fail in face-based identity verification scenarios such as security checks and community visit check-ins. It is therefore urgent to boost the performance of masked face recognition. Most recent advanced face recognition methods are based on deep learning, which depends heavily on a large number of training samples. However, there are currently no publicly available masked face recognition datasets, especially real-world ones. To this end, this work proposes three masked face datasets: the Masked Face Detection Dataset (MFDD), the Real-world Masked Face Recognition Dataset (RMFRD), and the Synthetic Masked Face Recognition Dataset (SMFRD). In addition, we conduct benchmark experiments on these three datasets for reference. As far as we know, we are the first to publicly release large-scale masked face recognition datasets, which can be downloaded for free at https://github.com/X-zhangyang/Real-World-Masked-Face-Dataset. © IEEE.

9.
Smart Innovation, Systems and Technologies ; 315:339-349, 2023.
Article in English | Scopus | ID: covidwho-2239280

ABSTRACT

The digitalization of human work has been an ever-evolving process. Students' and employees' attendance systems are automated using fingerprint biometrics, and the Covid situation in particular has created the need for a touchless attendance system. Many institutions have already implemented a face detection-based attendance system. However, the major problem in designing face-recognition biometric applications is scalability and timely accuracy when differentiating between multiple faces in a single clip or image. This paper uses the OpenFace model for face recognition and develops a multi-face recognition model. The Torch and Python deployment module of deep neural network-based face recognition was used, and it predicted accurately and in time. © 2023, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

10.
29th IEEE International Conference on Image Processing, ICIP 2022 ; : 726-730, 2022.
Article in English | Scopus | ID: covidwho-2223122

ABSTRACT

Face recognition under ideal conditions is now considered a well-solved problem with advances in deep learning. Recognizing faces under occlusion, however, still remains a challenge. Existing techniques often fail to recognize faces with both the mouth and nose covered by a mask, which is now very common under the COVID-19 pandemic. Common approaches to tackle this problem include 1) discarding information from the masked regions during recognition and 2) restoring the masked regions before recognition. Very few works considered the consistency between features extracted from masked faces and from their mask-free counterparts. This resulted in models trained for recognizing masked faces often showing degraded performance on mask-free faces. In this paper, we propose a unified framework, named Face Feature Rectification Network (FFR-Net), for recognizing both masked and mask-free faces alike. We introduce rectification blocks to rectify features extracted by a state-of-the-art recognition model, in both spatial and channel dimensions, to minimize the distance between a masked face and its mask-free counterpart in the rectified feature space. Experiments show that our unified framework can learn a rectified feature space for recognizing both masked and mask-free faces effectively, achieving state-of-the-art results. Project code: https://github.com/haoosz/FFR-Net. © 2022 IEEE.
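The toy sketch below illustrates the general idea of rectifying masked-face features toward their mask-free counterparts with channel and spatial operations plus a consistency loss. It is not FFR-Net itself; the block structure and the MSE consistency term are assumptions chosen only to make the concept concrete.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RectifyBlock(nn.Module):
    """Toy rectification block: channel gating followed by a residual spatial refinement."""
    def __init__(self, channels):
        super().__init__()
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(channels, channels, 1), nn.Sigmoid())
        self.spatial = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, feat):
        feat = feat * self.channel_gate(feat)   # re-weight channels
        return feat + self.spatial(feat)        # refine spatially with a residual

def consistency_loss(rectified_masked_feat, mask_free_feat):
    # Pull the rectified masked-face feature toward its mask-free counterpart.
    return F.mse_loss(rectified_masked_feat, mask_free_feat.detach())
```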

11.
2022 International Conference on Innovation and Intelligence for Informatics, Computing, and Technologies, 3ICT 2022 ; : 598-603, 2022.
Article in English | Scopus | ID: covidwho-2213124

ABSTRACT

People's lives have been severely disrupted by the fast worldwide proliferation and transmission of the COVID-19 outbreak. One option for controlling the epidemic is to require individuals to wear face masks in public, and automatic, effective face detection systems are needed for such regulation. This research provides a facial mask recognition model for real-time video streams, which categorizes images as with mask or without mask. A dataset from Kaggle was used to develop and assess the model. The suggested system is computationally more precise, efficient, and lightweight than systems such as VGG-16, DenseNet-121, and Inception-V3, which lets the developed model meet low-end PC system requirements. The collected dataset contains exactly 12,000 images, and by using MobileNetV2 the model achieves a training accuracy of 98.1% and a validation accuracy of 98.2%. © 2022 IEEE.

12.
10th Workshop on the Representation and Processing of Sign Languages: Multilingual Sign Language Resources, sign-lang 2022 ; : 1-8, 2022.
Article in English | Scopus | ID: covidwho-2207393

ABSTRACT

Video-based datasets for continuous sign language are scarce due to the challenging task of recording videos from native signers and the small number of people who can annotate sign language. COVID-19 has highlighted the key role of sign language interpreters in delivering nationwide health messages to deaf communities. In this paper, we present a framework for creating a multi-modal sign language interpretation dataset based on videos, and we use it to create the first dataset for Peruvian Sign Language (LSP) interpretation, annotated by hearing volunteers with intermediate knowledge of LSP guided by the video audio. We rely on hearing people to produce a first version of the annotations, which should be reviewed by native signers in the future. Our contributions are: i) we design a framework to annotate a sign language dataset; ii) we release the first annotated LSP multi-modal interpretation dataset (AEC); iii) we evaluate the annotations produced by hearing people by training a sign language recognition model. Our model reaches up to 80.3% accuracy over a minimum of five classes (signs) on the AEC dataset, and 52.4% on a second dataset. Nevertheless, per-subject analysis of the second dataset shows variations worth discussing. © European Language Resources Association (ELRA), licensed under CC-BY-NC 4.0.

13.
16th Chinese Conference on Biometric Recognition, CCBR 2022 ; 13628 LNCS:180-188, 2022.
Article in English | Scopus | ID: covidwho-2173744

ABSTRACT

As more and more people wear masks due to the current COVID-19 pandemic, existing face recognition systems may suffer severe performance degradation when recognizing masked faces. To study the impact of masks on face recognition models, we build a simple but effective tool that automatically generates masked faces from unmasked faces, and construct a new database called Masked LFW (MLFW) based on the Cross-Age LFW (CALFW) database. The masks on the faces generated by our method have good visual consistency with the original faces. Moreover, we collect various mask templates, covering most of the common styles that appear in daily life, to achieve diverse generation effects. Considering realistic scenarios, we design three kinds of combinations of face pairs. The recognition accuracy of SOTA models declines by 5%–16% on the MLFW database compared with the accuracy on the original images. The MLFW database can be viewed and downloaded at http://whdeng.cn/mlfw. © 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.

14.
International Conference on Data Analytics, Intelligent Computing, and Cyber Security, ICDIC 2020 ; 315:339-349, 2023.
Article in English | Scopus | ID: covidwho-2148663

ABSTRACT

The digitalization of human work has been an ever-evolving process. Students' and employees' attendance systems are automated using fingerprint biometrics, and the Covid situation in particular has created the need for a touchless attendance system. Many institutions have already implemented a face detection-based attendance system. However, the major problem in designing face-recognition biometric applications is scalability and timely accuracy when differentiating between multiple faces in a single clip or image. This paper uses the OpenFace model for face recognition and develops a multi-face recognition model. The Torch and Python deployment module of deep neural network-based face recognition was used, and it predicted accurately and in time. © 2023, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

15.
14th International Conference on Digital Image Processing, ICDIP 2022 ; 12342, 2022.
Article in English | Scopus | ID: covidwho-2137326

ABSTRACT

Masked face recognition has become an important issue for prevention and monitoring during the COVID-19 outbreak. Due to the loss of facial features caused by masks, standard face recognition cannot identify a specific person well. Current masked face methods either focus on local features from the unmasked regions or recover masked faces to fit standard face recognition models. These methods use only partial information from faces, so their features are not robust enough to deal with complex situations. To solve this problem, we propose a joint feature aggregation method for robust masked face recognition. First, we design a multi-module feature extraction network with a local module (LM), a global module (GM), and a recovery module (RM). Our method not only extracts global features from the original masked faces but also extracts local features from the unmasked area, since it is a discriminative part of masked faces. In particular, we utilize a pretrained recovery model to recover masked faces and obtain recovery features from the recovered faces. Finally, the features from the three modules are aggregated into a joint feature of the masked face. The joint feature enhances the feature representation of masked faces and is thus more discriminative and robust than those of previous methods. Experiments show that our method achieves better performance than previous methods on the LFW dataset. © 2022 SPIE.
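A minimal sketch of the aggregation idea follows: features from assumed global, local, and recovery sub-networks are concatenated and fused into one joint embedding. The module names, dimensions, and fusion layer are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class JointFeature(nn.Module):
    """Toy aggregation of global, local (unmasked-region) and recovery-based features."""
    def __init__(self, global_net, local_net, recovery_net, feat_dims, out_dim=512):
        super().__init__()
        self.gm, self.lm, self.rm = global_net, local_net, recovery_net
        self.fuse = nn.Linear(sum(feat_dims), out_dim)   # fuse concatenated features

    def forward(self, masked_face, unmasked_region, recovered_face):
        feats = [self.gm(masked_face),        # global features from the masked face
                 self.lm(unmasked_region),    # local features from the visible area
                 self.rm(recovered_face)]     # features from the recovered face
        return self.fuse(torch.cat(feats, dim=1))
```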

16.
1st Workshop on User-Centric Narrative Summarization of Long Videos, NarSUM 2022, held in conjunction with the 30th ACM International Conference on Multimedia, MM 2022 ; : 23-29, 2022.
Article in English | Scopus | ID: covidwho-2120704

ABSTRACT

With the worldwide spread of COVID-19, people are trying different ways to prevent the spread of the virus, one of the most common being wearing a face mask. Most people wear a face mask when they go out, which makes facial expression recognition harder. Thus, improving the performance of facial expression recognition models on masked faces is becoming an important issue. However, there is no public dataset of facial expressions with masks, so we built two: a real-world masked facial expression database (VIP-DB) and an artificially masked facial expression database (M-RAF-DB). To reduce the influence of masks, we utilize contrastive representation learning and propose a two-branch network. We study the influence of contrastive learning on our two datasets. Results show that using contrastive representation learning improves the performance of expression recognition from masked face images. © 2022 ACM.
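The snippet below shows a minimal NT-Xent-style contrastive loss in which the masked and unmasked views of the same face are treated as a positive pair. It is a generic sketch of contrastive representation learning, not the paper's two-branch network; the temperature value is an assumption.

```python
import torch
import torch.nn.functional as F

def nt_xent(z_masked, z_unmasked, tau=0.1):
    """Minimal NT-Xent loss: row i of z_masked and row i of z_unmasked are positives."""
    z = F.normalize(torch.cat([z_masked, z_unmasked]), dim=1)   # (2N, d)
    sim = z @ z.t() / tau                                        # scaled cosine similarities
    n = z_masked.size(0)
    sim.masked_fill_(torch.eye(2 * n, dtype=torch.bool), float("-inf"))  # drop self-pairs
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])    # index of positive
    return F.cross_entropy(sim, targets)
```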

17.
2022 Asia Conference on Algorithms, Computing and Machine Learning, CACML 2022 ; : 505-511, 2022.
Article in English | Scopus | ID: covidwho-2051936

ABSTRACT

Masked face recognition, a non-contact biometric technology, has attracted much attention and developed rapidly during the coronavirus disease 2019 (COVID-19) outbreak. Existing work trains masked face recognition models on large numbers of 2D masked face images. However, in practical application scenarios, it is difficult to obtain a large number of masked face images in a short period of time. Therefore, combining 3D face recognition technology, this paper proposes a masked face recognition model trained with non-masked face images. We locate and segment the complete face region and the face region not occluded by the mask from the face point clouds. The geometric features of the 3D face surface, namely depth, azimuth, and elevation, are extracted from these two regions to generate training data. The proposed masked face recognition model, based on a vision Transformer, divides the complete faces and partial faces into sequences of image patches and then captures the relationships between the patches to compensate for the missing face information, thereby improving recognition performance. Comparative experiments with state-of-the-art masked face recognition work are carried out on four databases. The experimental results show that the recognition accuracy of the proposed model is improved by 9.86% on the Bosphorus database, 16.77% on the CASIA-3D FaceV1 database, 2.32% on the StirlingESRC database, and 34.81% on the Ajmal main database, which verifies the effectiveness and stability of the proposed model. © 2022 IEEE.

18.
IEEE Access ; 10:86222-86233, 2022.
Article in English | Scopus | ID: covidwho-2018605

ABSTRACT

Over the years, the evolution of face recognition (FR) algorithms has been steep and accelerated by a myriad of factors. Motivated by the unexpected elements found in real-world scenarios, researchers have investigated and developed a number of methods for occluded face recognition (OFR). Due to the SARS-CoV-2 pandemic, however, masked face recognition (MFR) research branched off from OFR and became a hot and urgent research challenge. Because of time and data constraints, these models followed different and novel approaches to handle lower-face occlusions, i.e., face masks. This study therefore aims to evaluate the different approaches followed for both MFR and OFR, identify the links between the two conceptually similar research directions, and understand future directions for both topics. For this analysis, several occluded and face recognition algorithms from the literature are studied. First, they are evaluated on the task they were trained for, and then on the other. These methods were selected according to the novelty of their approach, proven state-of-the-art results, and publicly available source code. We present quantitative results on 4 occluded and 5 masked FR datasets, and a qualitative analysis of several MFR and OFR models on the Occ-LFW dataset. The presented analysis supports the interoperable deployability of MFR methods on OFR datasets when the occlusions are of a reasonable size. Thus, solutions proposed for MFR can be effectively deployed for general OFR. © 2022 IEEE.

19.
4th International Conference on Image Processing and Machine Vision, IPMV 2022 ; : 13-21, 2022.
Article in English | Scopus | ID: covidwho-1973911

ABSTRACT

During the coronavirus pandemic, the demand for contactless biometric technology has promoted the development of masked face recognition. Training a masked face recognition model must address two crucial issues: the lack of large-scale realistic masked face datasets, and the difficulty of obtaining robust face representations due to the huge difference between complete faces and masked faces. To tackle the first issue, this paper proposes to train a 3D masked face recognition network with non-masked face images. For the second issue, this paper utilizes the geometric features of the 3D face, namely depth, azimuth, and elevation, to represent the face. The inherent advantages of 3D faces enhance the stability and practicability of the 3D masked face recognition network. In addition, a facial geometry extractor is proposed to highlight discriminative facial geometric features so that the 3D masked face recognition network can take full advantage of the depth, azimuth, and elevation information in distinguishing face identities. The experimental results on four public 3D face datasets show that the proposed 3D masked face recognition network improves the accuracy of masked face recognition, which verifies the feasibility of training a masked face recognition model with non-masked face images. © 2022 ACM.
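As a rough illustration of the depth/azimuth/elevation representation mentioned in this and the previous 3D entry, the snippet below derives azimuth and elevation maps from unit surface normals using standard spherical angles; the authors' exact definitions and preprocessing may differ.

```python
import numpy as np

def geometry_maps(normals, depth):
    """One common way to form depth/azimuth/elevation maps from unit surface normals.

    normals: (H, W, 3) array of unit normals; depth: (H, W) depth map.
    Returns an (H, W, 3) stack of depth, azimuth, and elevation channels.
    """
    nx, ny, nz = normals[..., 0], normals[..., 1], normals[..., 2]
    azimuth = np.arctan2(ny, nx)                    # angle of the normal in the x-y plane
    elevation = np.arcsin(np.clip(nz, -1.0, 1.0))   # angle of the normal above the x-y plane
    return np.stack([depth, azimuth, elevation], axis=-1)
```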

20.
IEEE Internet of Things Journal ; : 1-1, 2022.
Article in English | Scopus | ID: covidwho-1961407

ABSTRACT

The COVID-19 pandemic has caused a high rate of infection, so effective prevention measures that avoid a second spread of COVID-19 in hospitals are a major challenge for healthcare workers. Hospitals, where medicines are collected, are vulnerable to the rapid spread of COVID-19. Using Internet of Things (IoT) remote health monitoring technology to automatically monitor and record patients' basic medical information reduces the workload of healthcare workers and avoids the direct contact that can cause secondary infections, making it an important research topic. This research proposes a new IoT-based artificial intelligence solution that replaces existing medicine stations and recognizes medicine bags with state-of-the-art optical character recognition (OCR) models, PP-OCR v2 and PGNet. Using optical character recognition to identify medicine bags can replace healthcare workers in data recording. In addition, this research proposes an administrator management and monitoring system to monitor the equipment, and provides a mobile application that lets patients check the latest status of their medicine bags in real time and record their medication times. The results of the experiments indicate that the recognition models work very well under different conditions (up to 80.76% with PP-OCR v2 and 94.22% with PGNet) and support both Chinese and English. © IEEE.
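For context, the snippet below shows a typical way to run PaddleOCR (the PP-OCR family) on an image such as a medicine-bag photo; the image path is a placeholder and the exact result structure varies across PaddleOCR versions, so this is only a usage sketch, not the paper's pipeline.

```python
# Usage sketch: text recognition on a medicine-bag image with PaddleOCR (PP-OCR models).
from paddleocr import PaddleOCR

ocr = PaddleOCR(use_angle_cls=True, lang="ch")   # Chinese model, also handles English text
result = ocr.ocr("medicine_bag.jpg", cls=True)   # placeholder image path

# In recent PaddleOCR versions, result[0] is a list of [box, (text, confidence)] entries.
for box, (text, confidence) in result[0]:
    print(f"{text}  (conf={confidence:.2f})")
```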
